In computing, multitasking is a method by which multiple tasks, also known as processes, share common processing resources such as a CPU. In the case of a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is actively executing instructions for that task. Multitasking solves this problem by scheduling which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning a CPU from one task to another is called a context switch. When context switches occur frequently enough, the illusion of parallelism is achieved. Even on computers with more than one CPU (called multiprocessor machines), multitasking allows many more tasks to be run than there are CPUs.
Operating systems may adopt one of many different scheduling strategies, which generally fall into the following categories:

In multiprogramming systems, the running task keeps the CPU until it performs an operation that requires waiting for an external event (e.g. reading from a tape), or until the computer's scheduler forcibly swaps the running task out of the CPU. Multiprogramming systems are designed to maximize CPU usage.

In time-sharing systems, the running task is required to relinquish the CPU, either voluntarily or by an external event such as a hardware interrupt. Time-sharing systems are designed to allow several programs to execute apparently simultaneously.

In real-time systems, some waiting tasks are guaranteed to be given the CPU when an external event occurs. Real-time systems are designed to control mechanical devices such as industrial robots, which require timely processing.
The term time-sharing is no longer commonly used, having been replaced by simply multitasking after the advent of personal computers and workstations displaced shared interactive systems.
In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the CPU would have to stop executing program instructions while the peripheral processed the data. This was deemed very inefficient.
The first computer using a multitasking system was the British LEO III, owned by J. Lyons and Co. Several different programs in batch were loaded in the computer's memory, and the first one began to run. When the first program reached an instruction that had to wait for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running.
Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the first program may well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was not a problem: users handed a deck of punched cards to an operator and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed.
When computer usage evolved from batch mode to interactive mode, multiprogramming was no longer a suitable approach. Each user wanted to see his program running as if it were the only program in the computer. The use of time sharing made this possible, with the qualification that the computer would not seem as fast to any one user as it really would if it were running only that user's program.
Early multitasking systems consisted of suites of related applications that voluntarily ceded time to each other. This approach, which was eventually supported by many computer operating systems, is today known as cooperative multitasking. Although it is now rarely used in larger systems, cooperative multitasking was once the scheduling scheme employed by Microsoft Windows (prior to Windows 95 and Windows NT) and Mac OS (prior to Mac OS X) in order to enable the running of multiple applications simultaneously. Windows 9x also used cooperative multitasking, but only for 16-bit legacy applications, much the same way as pre-Leopard PowerPC versions of Mac OS X used it for Classic applications. The network operating system NetWare used cooperative multitasking up to NetWare 6.5. Cooperative multitasking is still used today on RISC OS systems.
Because a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself or cause the whole system to hang. In a server environment, this is a hazard that makes the entire network fragile: all software must be evaluated and cleared for use in a test environment before being installed on the main server, or the entire network slows down or comes to a halt when a program on the server misbehaves.
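The cooperative model can be sketched in C; the round-robin loop and task names below are purely illustrative, not drawn from any real operating system, but they show both the mechanism and the hazard described above.

```c
#include <stdio.h>

/* Illustrative sketch of cooperative multitasking: the "scheduler" is a
 * plain round-robin loop, and each task runs only until it voluntarily
 * returns (yields) control. */
typedef void (*task_fn)(void);

static void task_a(void) { puts("task A: a little work, then yield"); }
static void task_b(void) { puts("task B: a little work, then yield"); }

int main(void) {
    task_fn tasks[] = { task_a, task_b };
    for (int turn = 0; turn < 6; turn++) {
        /* Control returns here only when a task chooses to return; a task
         * that looped forever would hang every other task in the system. */
        tasks[turn % 2]();
    }
    return 0;
}
```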
Despite the difficulty of designing and implementing cooperatively multitasked systems, time-constrained, real-time embedded systems (such as spacecraft) are often implemented using this paradigm. It allows highly reliable, deterministic control of complex real-time sequences, such as firing thrusters for deep-space course corrections.
Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to rapidly deal with important external events like incoming data, which might require the immediate attention of one or another process.
Operating systems were developed to take advantage of hardware capabilities such as interrupts and run multiple processes preemptively. For example, preemptive multitasking was implemented in the earliest version of Unix [1] in 1969, and is standard in Unix and Unix-like operating systems, including Linux, Solaris and BSD with its derivatives.
At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In primitive systems, the software would often "poll", or "busy-wait", while waiting for requested input (such as disk, keyboard or network input). During this time, the system was not performing useful work. With the advent of interrupts and preemptive multitasking, I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.
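The contrast between busy-waiting and blocking can be sketched with POSIX I/O; the use of standard input here is illustrative only.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[64];
    ssize_t n;

    /* Busy-waiting: with O_NONBLOCK set, read() returns immediately with
     * EAGAIN when no data is available, so the loop consumes CPU time
     * doing no useful work until input arrives. */
    int flags = fcntl(STDIN_FILENO, F_GETFL);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) < 0 && errno == EAGAIN)
        ;  /* polling */

    /* Blocking: with O_NONBLOCK cleared, the kernel suspends the process
     * and runs others; the interrupt raised when data arrives makes the
     * process runnable again. */
    fcntl(STDIN_FILENO, F_SETFL, flags);
    n = read(STDIN_FILENO, buf, sizeof buf);
    printf("read %zd bytes\n", n);
    return 0;
}
```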
The earliest preemptive multitasking OS available to home users was Sinclair QDOS on the Sinclair QL, released in 1984. The Commodore Amiga 1000, released in 1985 (and demonstrated by Debbie Harry and Andy Warhol at its unveiling), used a preemptive multitasking kernel that ran without the protection of an MMU while managing a coprocessor able to process 80 instructions per scan line. These capabilities were unmatched by other computers on the market at the time, and led NewTek to develop the Video Toaster to take advantage of them. Preemptive multitasking was later adopted on the Apple Macintosh by Mac OS 9.x [2] as an additional API; that is, an application could be programmed to use the preemptive or cooperative model, and all legacy applications were multitasked cooperatively within a single process. Mac OS X, being a Unix-like system, uses preemptive multitasking for all native applications, although Classic applications are multitasked cooperatively in a Mac OS 9 environment that itself runs as an OS X process (and is subject to preemption like any other OS X process).
A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively, and legacy 16-bit Windows 3.x programs are multitasked cooperatively within a single process, although in the NT family it is possible to force a 16-bit application to run as a separate preemptively multitasked process.[3] 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer provide support for legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications.
Another reason for multitasking was in the design of real-time computing systems, where a number of possibly unrelated external activities need to be controlled by a single processor system. In such systems a hierarchical interrupt system was coupled with process prioritization to ensure that key activities were given a greater share of available process time.
As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g. one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data.
Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are basically processes that run in the same memory context. Threads are described as lightweight because switching between threads does not involve changing the memory context.
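A short POSIX threads sketch illustrates this: both threads write directly into the same array because they share one memory context (the disjoint halves are chosen here so that no locking is needed).

```c
#include <pthread.h>
#include <stdio.h>

static int results[10];   /* shared by all threads in the process */

/* Each thread fills the half of the array starting at *arg. */
static void *fill(void *arg) {
    int start = *(int *)arg;
    for (int i = start; i < start + 5; i++)
        results[i] = i * i;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int lo = 0, hi = 5;
    pthread_create(&t1, NULL, fill, &lo);
    pthread_create(&t2, NULL, fill, &hi);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    for (int i = 0; i < 10; i++)
        printf("%d ", results[i]);   /* results from both threads, no copying */
    putchar('\n');
    return 0;
}
```

(Compile with -pthread on Unix-like systems.)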
While threads are scheduled preemptively, some operating systems provide a variant to threads, named fibers, that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors.
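Such application-level fibers can be sketched as worker functions called repeatedly by the application's own round-robin loop; each call advances a fiber's saved state a little and then returns, so scheduling remains cooperative. The structure below is a hypothetical illustration, not any particular fiber API.

```c
#include <stdbool.h>
#include <stdio.h>

struct fiber {
    int step;                       /* per-fiber saved state */
    bool (*work)(struct fiber *);   /* returns true while work remains */
};

static bool count_to_three(struct fiber *f) {
    printf("fiber %p at step %d\n", (void *)f, f->step);
    return ++f->step < 3;           /* do one small step, then yield */
}

int main(void) {
    struct fiber fibers[] = { {0, count_to_three}, {0, count_to_three} };
    bool pending = true;
    while (pending) {               /* application-level round-robin */
        pending = false;
        for (int i = 0; i < 2; i++)
            if (fibers[i].work != NULL) {
                if (fibers[i].work(&fibers[i]))
                    pending = true;
                else
                    fibers[i].work = NULL;   /* fiber finished */
            }
    }
    return 0;
}
```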
Some systems directly support multithreading in hardware.
When multiple programs are present in memory, an ill-behaved program may (inadvertently or deliberately) overwrite memory belonging to another program, or even to the operating system itself.
The operating system therefore restricts the memory accessible to the running program. A program trying to access memory outside its allowed range is immediately stopped before it can change memory belonging to another process.
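The effect is simple to observe: a minimal C program that touches an address outside its allowed range is stopped by the operating system before the write can take effect (on Unix-like systems it is terminated with a segmentation fault).

```c
int main(void) {
    int *p = (int *)1;   /* an address this process is not allowed to use */
    *p = 42;             /* the OS traps the access and stops the process */
    return 0;            /* never reached */
}
```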
Another key innovation was the idea of privilege levels. Tasks running at a low privilege level are denied some kinds of memory access and may not execute certain instructions. When a task tries to perform a privileged operation, a trap occurs and a supervisory program running at a higher privilege level is allowed to decide how to respond.
Use of a swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when the running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage.
Processes that are entirely independent are not much trouble to program. Most of the complexity in multitasking systems comes from the need to share computer resources between tasks and to synchronize the operation of cooperating tasks. Various concurrent computing techniques are used to avoid potential problems caused by multiple tasks attempting to access the same resource.
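One widely used technique is a mutual-exclusion lock (mutex). In the POSIX threads sketch below, two tasks increment a shared counter; without the mutex, their read-modify-write sequences could interleave and updates would be lost.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* only one task in this section at a time */
        counter++;                    /* read, modify, write as one unit */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the lock */
    return 0;
}
```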
Bigger computer systems were sometimes built with one or more central processors and some number of I/O processors, a kind of asymmetric multiprocessing.
Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.